Quantized Gromov-Wasserstein

Authors

Abstract

The Gromov-Wasserstein (GW) framework adapts ideas from optimal transport to allow for the comparison of probability distributions defined on different metric spaces. Scalable computation of GW distances and associated matchings on graphs and point clouds has recently been made possible by state-of-the-art algorithms such as S-GWL and MREC. Each of these algorithmic breakthroughs relies on decomposing the underlying spaces into parts and performing matchings on these parts, adding recursion as needed. While very successful in practice, theoretical guarantees on such methods are limited. Inspired by recent advances in the theory of quantization for metric measure spaces, we define Quantized Gromov-Wasserstein (qGW): a metric that treats parts as fundamental objects and fits into a hierarchy of upper bounds for the GW problem. This formulation motivates a new algorithm for approximating optimal GW matchings which yields algorithmic speedups and reductions in memory complexity. Consequently, we are able to go beyond outperforming state-of-the-art methods and apply GW matching at scales an order of magnitude larger than in the existing literature, including datasets containing over 1M points.
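
The partition-and-match idea described above can be illustrated with a short, hedged sketch. The code below is not the paper's qGW algorithm, just a minimal illustration of the quantization idea, assuming the POT library (ot.gromov.gromov_wasserstein) and scikit-learn are available; the helper name quantized_gw is made up for this example. Each point cloud is compressed to k representative points carrying the aggregated mass of their clusters, and a GW coupling is computed between the two small compressed spaces.

# Minimal sketch of the quantization idea (not the paper's qGW algorithm):
# compress each point cloud to k weighted representatives, then solve GW
# between the two small compressed spaces instead of the full ones.
import numpy as np
import ot  # POT: Python Optimal Transport (assumed installed)
from scipy.spatial.distance import cdist
from sklearn.cluster import KMeans

def quantized_gw(X, Y, k=50, seed=0):
    """Approximate a GW matching between point clouds X and Y via k-means quantization."""
    def compress(P):
        km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(P)
        reps = km.cluster_centers_                        # representative points
        masses = np.bincount(km.labels_, minlength=k).astype(float)
        return reps, masses / masses.sum(), km.labels_

    reps_X, p, labels_X = compress(X)
    reps_Y, q, labels_Y = compress(Y)

    C1 = cdist(reps_X, reps_X)   # k x k distance matrix of the first compressed space
    C2 = cdist(reps_Y, reps_Y)   # k x k distance matrix of the second compressed space

    # GW coupling between representatives: k x k instead of n x m.
    T = ot.gromov.gromov_wasserstein(C1, C2, p, q, loss_fun='square_loss')
    return T, labels_X, labels_Y

# Usage: two point clouds of different sizes.
rng = np.random.default_rng(0)
T, lx, ly = quantized_gw(rng.normal(size=(2000, 3)), rng.normal(size=(3000, 3)), k=50)
print(T.shape)  # (50, 50)

In the paper, the coupling between parts is further refined to a point-level matching; that refinement step is omitted in this sketch.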

Similar articles

Quantized Gromov-Hausdorff Distance

A quantized metric space is a matrix order unit space equipped with an operator space version of Rieffel’s Lip-norm. We develop for quantized metric spaces an operator space version of quantum Gromov-Hausdorff distance. We show that two quantized metric spaces are completely isometric if and only if their quantized Gromov-Hausdorff distance is zero. We establish a completeness theorem. As appli...

The Gromov-Wasserstein Distance: A Brief Overview

We recall the construction of the Gromov–Wasserstein distance and concentrate on quantitative aspects of the definition.
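
For reference, the construction recalled in this overview is, in one common convention (Mémoli's), the following: for metric measure spaces (X, d_X, μ_X) and (Y, d_Y, μ_Y),

\[
\mathrm{GW}_p(X,Y) \;=\; \frac{1}{2}\,\inf_{\pi \in \Pi(\mu_X,\mu_Y)}
\left( \iint \bigl| d_X(x,x') - d_Y(y,y') \bigr|^{p} \, d\pi(x,y)\, d\pi(x',y') \right)^{1/p},
\]

where Π(μ_X, μ_Y) is the set of couplings, i.e. probability measures on X × Y with marginals μ_X and μ_Y; normalization conventions (such as the factor 1/2) vary across references.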

Gromov-Wasserstein Averaging of Kernel and Distance Matrices

This paper presents a new technique for computing the barycenter of a set of distance or kernel matrices. These matrices, which define the interrelationships between points sampled from individual domains, are not required to have the same size or to be in row-by-row correspondence. We compare these matrices using the softassign criterion, which measures the minimum distortion induced by a prob...
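
The POT library ships barycenter solvers descended from this line of work; the sketch below uses ot.gromov.gromov_barycenters to illustrate averaging distance matrices of different sizes, under the assumption that POT is installed. It is a non-entropic variant, offered as an illustration rather than the paper's exact procedure.

# Hedged sketch: a Gromov-Wasserstein barycenter of distance matrices of
# different sizes, using POT's ot.gromov.gromov_barycenters (assumed available).
import numpy as np
import ot
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)
clouds = [rng.normal(size=(n, 2)) for n in (40, 60, 80)]   # different sizes, no correspondence
Cs = [cdist(P, P) for P in clouds]                         # input distance matrices
ps = [ot.unif(C.shape[0]) for C in Cs]                     # uniform weights on each space

N = 30                                                     # size of the barycenter space
C_bary = ot.gromov.gromov_barycenters(
    N, Cs, ps=ps, p=ot.unif(N),
    lambdas=[1.0 / len(Cs)] * len(Cs),                     # equal weight per input matrix
    loss_fun='square_loss',
)
print(C_bary.shape)  # (30, 30) distance matrix of the barycenter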

A spectral notion of Gromov–Wasserstein distance and related methods

Wasserstein GAN

The problem this paper is concerned with is that of unsupervised learning. Mainly, what does it mean to learn a probability distribution? The classical answer to this is to learn a probability density. This is often done by defining a parametric family of densities (P_θ)_{θ∈R^d} and finding the one that maximizes the likelihood on our data: if we have real data examples {x^(i)}_{i=1}^m, we would solve the pr...
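
The optimization problem the snippet cuts off is the standard maximum-likelihood objective, which in this notation reads

\[
\max_{\theta \in \mathbb{R}^{d}} \; \frac{1}{m} \sum_{i=1}^{m} \log P_\theta\!\bigl(x^{(i)}\bigr),
\]

an objective the WGAN paper then contrasts with minimizing the Earth-Mover (Wasserstein-1) distance between distributions instead.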

Journal

Journal title: Lecture Notes in Computer Science

Year: 2021

ISSN: 0302-9743 (print), 1611-3349 (electronic)

DOI: https://doi.org/10.1007/978-3-030-86523-8_49